
    Designing model-based intelligent dialogue systems

    Intelligent systems are served by intelligent user interfaces that aim to improve the efficiency, effectiveness, and adaptivity of the interaction between user and computer by representing, understanding, and implementing models. The Intelligent User Interface Model (IUIM) supports the design and development of intelligent systems by addressing both their architecture and their behavior, and it places the interaction and dialogue between user and system at the heart of an intelligent interactive system. The IUIM is composed of an architectural model, which defines the components of the model, and a conceptual model, which concerns its contents and behavior. The conceptual model defines three elements: an Adaptive User Model (including components for building and updating the user model), a Task Model (including general and domain-specific knowledge), and an Adaptive Discourse Model (assisted by an intelligent help module and a learning module). We show the implementation of the model by describing an application named Stigma - A STereotypical Intelligent General Matching Agent for Improving Search Results on the Internet. Finally, we compare the new model with others, stating the differences and the advantages of the proposed model.
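
    The abstract describes the conceptual model's three elements only in prose. The following is a minimal sketch of how such components could be wired together for a single dialogue turn; all class and method names are illustrative assumptions, not the IUIM's or Stigma's actual implementation.

```python
# Minimal sketch (assumed names) of the IUIM's three conceptual elements
# and how they could cooperate in one dialogue turn.
from dataclasses import dataclass, field

@dataclass
class AdaptiveUserModel:
    """Builds and updates a model of the current user (e.g., a stereotype)."""
    stereotype: str = "novice"
    observations: list = field(default_factory=list)

    def update(self, user_input: str) -> None:
        # Record evidence; a real component would revise the stereotype here.
        self.observations.append(user_input)

@dataclass
class TaskModel:
    """Holds general and domain-specific knowledge about the task."""
    domain_knowledge: dict = field(default_factory=dict)

    def relevant_knowledge(self, user_input: str) -> dict:
        return {k: v for k, v in self.domain_knowledge.items() if k in user_input}

@dataclass
class AdaptiveDiscourseModel:
    """Plans the system's contribution, adapted to the user model."""
    def respond(self, user: AdaptiveUserModel, knowledge: dict) -> str:
        detail = "brief" if user.stereotype == "expert" else "detailed"
        return f"[{detail} answer using {list(knowledge) or 'general help'}]"

class IntelligentDialogueSystem:
    """Wires the three elements together for one interaction turn."""
    def __init__(self) -> None:
        self.user_model = AdaptiveUserModel()
        self.task_model = TaskModel({"search": "query-refinement strategies"})
        self.discourse = AdaptiveDiscourseModel()

    def turn(self, user_input: str) -> str:
        self.user_model.update(user_input)
        return self.discourse.respond(
            self.user_model, self.task_model.relevant_knowledge(user_input))

print(IntelligentDialogueSystem().turn("help me search"))
```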

    Don't Miss-R - Recommending Restaurants through an Adaptive Mobile System

    The present study compares an adaptive, simulated cellular-phone-based recommender system to a non-adaptive one, in order to evaluate user preferences with respect to system adaptivity. The results show that users prefer the adaptive system over the non-adaptive one even after minimal interaction with the system.
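
    The abstract reports the study's outcome rather than the recommender's internals, so the sketch below only illustrates the contrast being evaluated, under the assumption that adaptivity means re-weighting preferences from observed choices; all feature names, weights, and the update rule are invented, not Don't Miss-R's.

```python
# Illustrative contrast between a non-adaptive and an adaptive recommender;
# features, weights, and the update rule are assumptions for this sketch.
restaurants = [
    {"name": "Trattoria", "cuisine": "italian", "price": 2},
    {"name": "Sushi Bar", "cuisine": "japanese", "price": 3},
]

def recommend(prefs: dict, items: list) -> list:
    """Rank items by a weighted match against the user's preferences."""
    return sorted(items, reverse=True,
                  key=lambda r: prefs.get(r["cuisine"], 0.0) - 0.1 * r["price"])

# Non-adaptive condition: the same fixed preferences for every user.
static_prefs = {"italian": 0.5, "japanese": 0.5}

# Adaptive condition: preferences nudged toward what the user selects.
adaptive_prefs = dict(static_prefs)
def observe_choice(prefs: dict, item: dict, lr: float = 0.2) -> None:
    prefs[item["cuisine"]] = prefs.get(item["cuisine"], 0.0) + lr

observe_choice(adaptive_prefs, restaurants[1])  # user picked the sushi bar
print([r["name"] for r in recommend(adaptive_prefs, restaurants)])
```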

    A first evaluation study of a database of kinetic facial expressions (DaFEx)

    In this paper we present DaFEx (Database of Facial Expressions), a database created with the purpose of providing a benchmark for the evaluation of the facial expressivity of Embodied Conversational Agents (ECAs). DaFEx consists of 1008 short videos containing emotional facial expressions of the six Ekman emotions plus the neutral expression. The facial expressions were recorded by 8 professional actors (male and female) in two acting conditions (“utterance” and “no-utterance”) and at 3 intensity levels (high, medium, low). The properties of DaFEx were studied by having 80 subjects classify the emotion expressed in the videos. High rates of accuracy were obtained for most of the emotions displayed. We also tested the effect of the intensity level, of the articulatory movements due to speech, and of the actors’ and subjects’ gender on classification accuracy. The results showed that decoding accuracy decreases with the intensity of emotions; that the presence of articulatory movements negatively affects the recognition of fear, surprise, and the neutral expression, while it improves the recognition of anger; and that facial expressions seem to be recognized (slightly) better when acted by actresses than by actors.
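
    The design described above is fully factorial, which makes the stated video count easy to verify in a few lines; the identifiers below are assumptions (the abstract gives no schema), and the residual factor of three presumably corresponds to repeated recordings per design cell, which the abstract does not spell out.

```python
# Checking the factorial structure stated in the abstract; names are assumed.
from itertools import product

EMOTIONS    = ["anger", "disgust", "fear", "happiness", "sadness",
               "surprise", "neutral"]               # 6 Ekman emotions + neutral
ACTORS      = [f"actor_{i}" for i in range(1, 9)]   # 8 professional actors
CONDITIONS  = ["utterance", "no_utterance"]         # 2 acting conditions
INTENSITIES = ["high", "medium", "low"]             # 3 intensity levels

cells = list(product(ACTORS, EMOTIONS, CONDITIONS, INTENSITIES))
print(len(cells))          # 336 distinct design cells
print(1008 // len(cells))  # 3 -> three videos per cell in the 1008 total
```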

    The Properties of DaFEx, a Database of Kinetic Facial Expressions

    In this paper we present an evaluation study of DaFEx (Database of Facial Expressions), a database created with the purpose of providing a benchmark for the evaluation of the facial expressivity of Embodied Conversational Agents (ECAs). DaFEx consists of 1008 short videos containing emotional facial expressions of the six Ekman emotions plus the neutral expression. The facial expressions were recorded by 8 professional actors (male and female) in two acting conditions (“utterance” and “non-utterance”) and at 3 intensity levels (high, medium, low). The properties of DaFEx were studied by having 80 subjects classify the emotion expressed in the videos. We tested the effect of the intensity level, of the articulatory movements due to speech, and of the actors’ and subjects’ gender on classification accuracy. We also studied the way errors distribute across confusion classes. The results are summarized in this work.
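
    The error analysis mentioned above amounts to building a per-emotion distribution of judgments over confusion classes; a minimal sketch follows, with invented response data.

```python
# Tally how subjects' judgments distribute across confusion classes;
# the (acted, judged) pairs below are invented for illustration.
from collections import Counter, defaultdict

responses = [("fear", "surprise"), ("fear", "fear"), ("anger", "anger"),
             ("surprise", "fear"), ("neutral", "neutral")]

confusion: dict = defaultdict(Counter)
for acted, judged in responses:
    confusion[acted][judged] += 1

for acted, row in confusion.items():
    total = sum(row.values())
    dist = {emotion: count / total for emotion, count in row.items()}
    print(acted, "->", dist)  # per-class distribution of judgments
```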

    KNAVE-II: A Distributed Architecture for Interactive Visualization and Intelligent Exploration of Time-Oriented Clinical Data

    Interpretation and exploration of longitudinal clinical data is a major part of diagnosis, therapy, quality assessment, and clinical research, particularly for chronic patients. KNAVE-II is an intelligent interface to a distributed architecture specific to the tasks of querying, knowledge-based interpretation, summarization, visualization, and interactive exploration of large numbers of distributed time-oriented clinical data, as well as dynamic sensitivity analysis of these data. The web-based architecture enables users (e.g., physicians) to query, visualize, and explore clinical time-oriented databases. Both the generation of context-sensitive interpretations (abstractions) of the time-stamped data and the dynamic visual exploration of the raw data and of the multiple levels of concepts abstracted from these data are supported by runtime access to domain-specific knowledge bases maintained by domain experts. KNAVE-II was designed according to a set of well-defined desiderata. The architecture enables exploration along both absolute (calendar-based) and relative (clinically meaningful) timelines. The underlying architecture uses standardized vocabularies (such as a controlled dictionary for laboratory tests and physical observations) and predefined mappings to local data sources for communication among its various components. Thus, the new framework enables users to access and explore multiple remote heterogeneous databases without explicitly knowing their structure.
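
    The core mechanism implied above is knowledge-based temporal abstraction over time-stamped data, explored along either absolute or relative timelines. The sketch below is a toy rendering of that idea under stated assumptions: the concept name, value ranges, and anchor date are all invented here, not taken from KNAVE-II's knowledge bases.

```python
# Toy knowledge-based temporal abstraction with absolute and relative
# timelines; concept name, ranges, and dates are illustrative assumptions.
from datetime import date

# Domain knowledge base: context ranges maintained by domain experts.
KNOWLEDGE = {"hemoglobin_g_dl": [(0, 12, "LOW"), (12, 16, "NORMAL"),
                                 (16, 99, "HIGH")]}

raw = [(date(2003, 1, 5), 10.8), (date(2003, 2, 9), 12.4),
       (date(2003, 3, 20), 13.1)]  # time-stamped lab results

def abstract_series(concept: str, points: list) -> list:
    """Map raw values to qualitative abstractions via the knowledge base."""
    return [(when, next(label for lo, hi, label in KNOWLEDGE[concept]
                        if lo <= value < hi))
            for when, value in points]

def relative_timeline(points: list, anchor: date) -> list:
    """Re-index from calendar dates to days since a clinically meaningful
    anchor (e.g., start of therapy) -- the 'relative' exploration mode."""
    return [((when - anchor).days, label) for when, label in points]

abstractions = abstract_series("hemoglobin_g_dl", raw)
print(abstractions)                                       # absolute timeline
print(relative_timeline(abstractions, date(2003, 1, 1)))  # relative timeline
```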

    I like it - An affective interface for a multimodal museum guide

    The optimal multimedia tourist guide should support strong personalization of all the information provided in a museum, ensuring that each visitor can experience and interpret the visit according to their own pace and interests. We claim that an interaction based on expressing an affective attitude may improve the usability of an interface, in particular when, as in museums, the technology should not hinder the “real” experience. In this paper, we discuss an affective interface that uses an explicit signal of interest to guide the amount of detail presented about the museum exhibits. We discuss an initial design, two user studies, and a second design that is better understood by the user.
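
    The interface idea above reduces to a simple control loop: an explicit interest signal raises the level of detail for subsequent content, and its absence lowers it. The sketch below is one possible reading of that loop; the class, level names, and exhibit are invented, not taken from the paper's guide.

```python
# One possible reading of the 'explicit signal of interest' design; all
# names here are invented, not taken from the paper's interface.
class AffectiveGuide:
    LEVELS = ["headline", "summary", "full_story"]

    def __init__(self) -> None:
        self.level = 1  # start with a medium amount of detail

    def i_like_it(self) -> None:
        """Explicit interest: offer deeper content on subsequent exhibits."""
        self.level = min(self.level + 1, len(self.LEVELS) - 1)

    def move_on(self) -> None:
        """No interest expressed: fall back to shorter presentations."""
        self.level = max(self.level - 1, 0)

    def present(self, exhibit: str) -> str:
        return f"{exhibit}: {self.LEVELS[self.level]} presentation"

guide = AffectiveGuide()
print(guide.present("Exhibit 12"))   # summary presentation
guide.i_like_it()
print(guide.present("Exhibit 12"))   # full_story presentation
```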